28 research outputs found
Optimizing thermodynamic trajectories using evolutionary and gradient-based reinforcement learning
Using a model heat engine, we show that neural-network-based reinforcement learning can identify thermodynamic trajectories of maximal efficiency. We consider both gradient and gradient-free reinforcement learning. We use an evolutionary learning algorithm to evolve a population of neural networks, subject to a directive to maximize the efficiency of a trajectory composed of a set of elementary thermodynamic processes; the resulting networks learn to carry out the maximally efficient Carnot, Stirling, or Otto cycles. When given an additional irreversible process, this evolutionary scheme learns a previously unknown thermodynamic cycle. Gradient-based reinforcement learning is able to learn the Stirling cycle, whereas an evolutionary approach achieves the optimal Carnot cycle. Our results show how the reinforcement learning strategies developed for game playing can be applied to solve physical problems conditioned upon path-extensive order parameters.
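The gradient-free evolutionary scheme the abstract describes can be sketched in miniature: a population of parameter vectors (standing in for neural networks) is repeatedly mutated and selected to maximize a fitness score (standing in for cycle efficiency). The objective, population sizes, and all names below are illustrative assumptions, not the paper's actual setup.

```python
import random

def fitness(params):
    # Stand-in objective: maximized (value 1.0) at params == [0.5, 0.5].
    # In the paper's setting this would be the efficiency of the
    # thermodynamic trajectory the network produces.
    return 1.0 - sum((p - 0.5) ** 2 for p in params)

def evolve(pop_size=20, dim=2, generations=100, sigma=0.1, seed=0):
    rng = random.Random(seed)
    # Initialize a random population of parameter vectors.
    population = [[rng.uniform(0, 1) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness and keep the top half as parents (elitism).
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Refill the population with Gaussian-mutated copies of parents.
        children = [[p + rng.gauss(0, sigma) for p in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

Because the best individuals survive unchanged each generation, the top fitness is non-decreasing; mutation supplies the exploration that gradient-based methods get from policy gradients.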
Electrochromic properties of a poly(dithienylfuran) derivative featuring a redox-active dithiin unit
A teraryl monomer containing a 1,4-dithiin-furan central unit has been synthesised and characterised by single-crystal X-ray crystallography. The di(thienyl)furan monomer 11 was successfully polymerised electrochemically and shown to possess a lower electrochemical band gap than its terthiophene analogue (1.97 eV vs. 2.11 eV). The electrochromic properties of this polymer proved superior to those of PEDOT, with fast switching and reversible colour transformation at high colour contrast (CE = 212 cm² C⁻¹ vs. 183 cm² C⁻¹ for PEDOT at 95% optical switch).
A blood atlas of COVID-19 defines hallmarks of disease severity and specificity.
Treatment of severe COVID-19 is currently limited by clinical heterogeneity and incomplete description of specific immune biomarkers. We present here a comprehensive multi-omic blood atlas for patients with varying COVID-19 severity in an integrated comparison with influenza and sepsis patients versus healthy volunteers. We identify immune signatures and correlates of host response. Hallmarks of disease severity involved cells, their inflammatory mediators and networks, including progenitor cells and specific myeloid and lymphocyte subsets, features of the immune repertoire, acute phase response, metabolism, and coagulation. Persisting immune activation involving AP-1/p38MAPK was a specific feature of COVID-19. The plasma proteome enabled sub-phenotyping into patient clusters, predictive of severity and outcome. Systems-based integrative analyses, including tensor and matrix decomposition of all modalities, revealed feature groupings linked with severity and specificity compared to influenza and sepsis. Our approach and blood atlas will support future drug development, clinical trial design, and personalized medicine approaches for COVID-19.
Self-assembly of halogen adducts of ester and carboxylic acid functionalised 1,3-dithiole-2-thiones
New halogen adducts of 1,3-dithiole-2-thione-4-carboxylic acid and dimethyl 1,3-dithiole-2-thione-4,5-dicarboxylate have been prepared and characterised by X-ray crystallographic studies. The adducts feature an array of intramolecular and intermolecular close contacts involving chalcogen-chalcogen, chalcogen-halogen and halogen-halogen interactions. Whilst these contacts are derived from the basic heterocyclic unit and the coordinated halogen molecules, the ester and acid functionalities provide a varied and rich array of hydrogen-bonding motifs, which significantly enhance the long-range order of the supramolecular structures.